
Fix #17050: actions warning message #17098

Closed
wants to merge 3 commits
Conversation

dpy013
Contributor

@dpy013 dpy013 commented Sep 2, 2024

Link to issue number:

fix #17050

Summary of the issue:

Fix the GitHub Actions warning: "The following actions use a deprecated Node.js version and will be forced to run on node20: actions/checkout@v3, actions/setup-python@v4. For more info: https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/"

Description of user facing changes

No user changes

Description of development approach

Upgrade actions/checkout@v3 and actions/setup-python@v4 to their latest stable versions to get rid of the warning message above.
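For reference, the updated workflow steps would look like this (the version numbers shown are the current node20-based stable releases; the python-version value is illustrative):

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-python@v5
    with:
      python-version: '3.11'
```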

Testing strategy:

Not yet.

Known issues with pull request:

Not yet.

Code Review Checklist:

  • Documentation:
    • Change log entry
    • User Documentation
    • Developer / Technical Documentation
    • Context sensitive help for GUI changes
  • Testing:
    • Unit tests
    • System (end to end) tests
    • Manual testing
  • UX of all users considered:
    • Speech
    • Braille
    • Low Vision
    • Different web browsers
    • Localization in other languages / culture than English
  • API is compatible with existing add-ons.
  • Security precautions taken.

Summary by CodeRabbit

  • New Features

    • Introduced an automated workflow to update English user documentation and its translations.
    • Added functionality to generate Markdown files from localized XLIFF files, improving support for non-English languages.
    • Launched a command-line utility for managing markdown translations, including generating, updating, and translating files.
  • Bug Fixes

    • Enhanced consistency checks between markdown and XLIFF files to ensure accuracy in translations.
  • Tests

    • Implemented a comprehensive suite of unit tests for the markdown translation module to validate its functionalities.

michaelDCurran and others added 2 commits September 2, 2024 13:32
Currently, user documentation such as the user guide is made translatable via a custom (and very old) translation system hosted by NV Access. For many reasons we need to move away from this old system to something more mainstream and maintainable. We have already successfully moved translation of NVDA interface messages to Crowdin, and we should do the same for the user guide and other documentation.

Description of development approach
• Added markdownTranslate.py, which contains several commands for generating and updating xliff files from markdown files. These xliff files can then be uploaded to Crowdin for translation, and eventually downloaded again and converted back to markdown files. Commands include: 
◦ generateXliff: to generate an xliff file from a markdown file. Firstly, a 'skeleton' of the markdown file is produced, which is all the structure of a markdown file, but the translatable content on each line has been replaced by a special translation ID. Lines such as blank lines, hidden header rows, or table header separator lines are included in the skeleton intact and are not available for translation. The xliff file is then produced, which contains one translatable string per translation unit, keyed by its respective translation ID. Each unit also contains translator notes to aid in translation, such as the line number, and any prefix or suffix markdown structure. E.g. a heading might have a prefix of ### and a suffix of {#Intro}. The skeleton is also embedded into the xliff file so that it is possible to update the xliff file keeping existing translation IDs, and/or generate the existing markdown file from the xliff file.
◦ generateMarkdown: Given an xliff file, the original markdown file is reproduced from the embedded skeleton, using either the translated or source strings from the xliff file, depending on whether you want a translated or untranslated markdown file.
◦ updateXliff: to update an existing xliff file with changes from a markdown file, ensuring that IDs of existing translatable strings are kept intact. This command extracts the skeleton from the xliff file, makes a diff of the old and new markdown files, then applies this diff to the skeleton file, i.e. it removes skeleton lines that were removed from the markdown file, and adds skeleton lines (with new IDs) for lines that are newly added to the markdown file. All existing lines stay as is, keeping their existing translation IDs. Finally, a new xliff file is generated from the up-to-date markdown file and skeleton, resulting in an xliff file that contains all translatable strings from the new markdown file, but reusing translation IDs for existing strings.
◦ translateXliff: given an xliff file, and a pretranslated markdown file that matches the skeleton, a new xliff file is produced containing translations for all strings.
◦ pretranslateAllPossibleLangs: this walks the NVDA user_docs directory, and for each language, pretranslates the English xliff file using the existing pretranslated markdown file from the old translation system (if it matches the skeleton exactly) producing a translated xliff file that can be uploaded to Crowdin to bring an existing translation up to where it was in the old system.
• Added a generated xliff file for the current English user guide markdown file. Note that this has been uploaded to Crowdin for translation.
• Added a GitHub action that runs on the beta branch if English userGuide.md changes. The action regenerates the original markdown file from the current English user guide xliff, then updates the xliff file based on the changes from the original markdown file to the current markdown file. This xliff file is then uploaded to Crowdin, and also committed and pushed to beta.
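The skeleton-and-translation-ID mechanism described for generateXliff above can be sketched roughly as follows (a simplified illustration; the regex patterns and function name are assumptions, not the actual markdownTranslate.py code):

```python
import re
import uuid

# Simplified sketch of the skeleton/ID idea: replace each line's translatable
# content with a placeholder, keeping structural lines intact.
RE_HEADING = re.compile(r"^(#+ )(.*?)( \{#.+\})?$")
RE_TABLE_SEPARATOR = re.compile(r"^\|(?: ?-+ ?\|)+$")


def skeletonize(mdText: str) -> tuple[str, dict[str, str]]:
	"""Return (skeleton text, mapping of translation ID -> source string)."""
	skeletonLines = []
	units: dict[str, str] = {}
	for line in mdText.splitlines():
		if not line.strip() or RE_TABLE_SEPARATOR.match(line):
			# Structural lines are kept as-is and are not translatable.
			skeletonLines.append(line)
			continue
		prefix = suffix = ""
		content = line
		if m := RE_HEADING.match(line):
			# Heading structure (e.g. "## " prefix, "{#Intro}" anchor suffix)
			# stays in the skeleton; only the heading text is translatable.
			prefix, content = m.group(1), m.group(2)
			suffix = m.group(3) or ""
		transID = str(uuid.uuid4())
		units[transID] = content
		skeletonLines.append(f"{prefix}$(ID:{transID}){suffix}")
	return "\n".join(skeletonLines), units
```

Regenerating a (translated) markdown file is then just substituting each `$(ID:...)` placeholder with the corresponding translated or source string.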
Fix the actions error message: "The following actions use a deprecated Node.js version and will be forced to run on node20: actions/checkout@v3, actions/setup-python@v4. For more info: https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/"
@dpy013 dpy013 requested a review from a team as a code owner September 2, 2024 05:09
@dpy013 dpy013 requested a review from SaschaCowley September 2, 2024 05:09

coderabbitai bot commented Sep 2, 2024

Walkthrough

The changes introduce a new GitHub Actions workflow for automating updates to English user documentation and its XLIFF translation files. The markdownTranslate.py script is added to manage the generation, updating, and translation of markdown and XLIFF files. A suite of unit tests for the markdownTranslate module is also created to ensure functionality. The modifications aim to streamline the translation process for the user guide, moving it to a more maintainable system.

Changes

Files Change Summary
.github/workflows/regenerate_english_userDocs_translation_source.yml New workflow for updating English user documentation and XLIFF files based on Markdown changes.
sconstruct Added functionality to generate Markdown files from localized XLIFF files.
tests/unit/test_markdownTranslate.py Introduced unit tests for the markdownTranslate module to validate its functionalities.
user_docs/markdownTranslate.py New utility script for managing markdown translations, including XLIFF generation and updates.

Assessment against linked issues

Objective Addressed Explanation
Add ability to translate the user guide on Crowdin (#[17050])
Move translation of user documentation to Crowdin (#[17050])
Automate updates for markdown and XLIFF files (#[17050])
Ensure existing translations are preserved (#[17050])


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

Outside diff range, codebase verification and nitpick comments (4)
tests/unit/test_markdownTranslate.py (1)

15-40: LGTM: Test class setup is well-structured.

The TestMarkdownTranslate class is properly set up with setUp and tearDown methods for managing test resources. The helper method runMarkdownTranslateCommand is a good practice for reducing code duplication.

Consider adding type hints to the runMarkdownTranslateCommand method for improved readability:

	def runMarkdownTranslateCommand(self, description: str, args: list[str]) -> None:
sconstruct (1)

336-346: Integration with existing build process

While the new functionality to generate localized Markdown files from XLIFF is well-implemented, there are some considerations regarding its integration with the existing build process:

  1. The existing process generates HTML files from Markdown files (lines 347-357). Consider updating this process to include the newly generated localized Markdown files.

  2. The userGuide and keyCommands targets (lines 391-402) currently only process the English versions. You might want to extend these to handle localized versions as well.

To fully integrate this new functionality, consider the following steps:

  1. Update the HTML generation process to include the newly created localized Markdown files.
  2. Modify the userGuide and keyCommands targets to generate localized versions of these documents.
  3. Ensure that the distribution package (dist target) includes the localized documentation.

These changes would ensure that the new localized content is fully utilized in the build and distribution process.
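The kind of extension suggested above could look something like this sketch (the directory layout and helper name are assumptions; in the real sconstruct, such pairs would be registered as SCons build targets rather than returned as a list):

```python
import os


def localizedHtmlTargets(
	userDocsDir: str,
	docNames: list[str],
	langs: list[str],
) -> list[tuple[str, str]]:
	"""Pair each localized Markdown file with the HTML file it should produce.

	Purely illustrative: these pairs would be wired into the existing
	HTML-generation builders alongside the English userGuide/keyCommands targets.
	"""
	pairs = []
	for lang in langs:
		for doc in docNames:
			md = os.path.join(userDocsDir, lang, f"{doc}.md")
			html = os.path.join(userDocsDir, lang, f"{doc}.html")
			pairs.append((md, html))
	return pairs
```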

user_docs/markdownTranslate.py (2)

6-20: Consider grouping imports for better readability.

The imports are comprehensive and appropriate for the functionality of the script. However, consider grouping them into standard library imports, third-party imports, and local imports for better readability.

Here's a suggested regrouping:

# Standard library imports
import argparse
import contextlib
import os
import re
import subprocess
import tempfile
import uuid
from dataclasses import dataclass
from itertools import zip_longest
from typing import Generator
from xml.sax.saxutils import escape as xmlEscape
from xml.sax.saxutils import unescape as xmlUnescape

# Third-party imports
import difflib
import lxml.etree

292-326: Consider adding progress reporting to updateXliff function.

The updateXliff function performs several operations but doesn't provide progress updates. Consider adding progress reporting to give users feedback on the current step being executed.

You could add print statements or use a progress bar library like tqdm to show progress for each step.
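As a minimal illustration of that suggestion (the step descriptions here are assumptions; a library such as tqdm could be substituted for the plain prints):

```python
def reportProgress(stepIndex: int, totalSteps: int, description: str) -> str:
	"""Format a one-line progress message for a multi-step operation."""
	return f"[{stepIndex}/{totalSteps}] {description}"


def runStepsWithProgress(steps) -> None:
	"""Run (description, callable) pairs, printing progress before each step."""
	total = len(steps)
	for index, (description, action) in enumerate(steps, start=1):
		print(reportProgress(index, total, description))
		action()
```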

Comment on lines +30 to +53
- name: update xliff files
shell: pwsh
run: |
# for any English markdown files changed within the commits of this push,
# update the corresponding xliff file (if one exists) to reflect the current markdown file,
# keeping existing translation IDs in tact.
$ErrorActionPreference = 'Stop'
$changedFiles = git diff --name-only ${{github.event.before}}.. -- user_docs/en/*.md
foreach ($file in $changedFiles) {
Write-Host "$file has changed"
$baseName = [System.IO.Path]::GetFileNameWithoutExtension($file)
$xliff = "user_docs/en/$baseName.xliff"
$tempXliff = "user_docs/en/$baseName.xliff.temp"
$markdown = $file
if (Test-Path $xliff) {
Write-Host "Updating $xliff with changes from $markdown"
python user_docs/markdownTranslate.py updateXliff -x $xliff -m $file -o $tempXliff
Write-Host "Renaming $tempXliff to $xliff"
move-item -Path $tempXliff -Destination $xliff -Force
} else {
Write-Host "Ignoring $markdown as it does not have a corresponding xliff file"
}
}
if: success()

XLIFF update process is well-implemented, but could be optimized.

The script efficiently updates XLIFF files for changed markdown files. However, there's room for improvement:

  1. Consider using a more efficient git command to get changed files.
  2. The error handling could be more robust.

Consider the following optimizations:

  1. Replace the git command with:
-$changedFiles = git diff --name-only ${{github.event.before}}.. -- user_docs/en/*.md
+$changedFiles = git diff --name-only ${{github.event.before}} ${{github.sha}} -- user_docs/en/*.md
  1. Add error handling for the Python script execution:
 python user_docs/markdownTranslate.py updateXliff -x $xliff -m $file -o $tempXliff
+if ($LASTEXITCODE -ne 0) {
+    Write-Error "Failed to update XLIFF file: $xliff"
+    exit 1
+}

Comment on lines +55 to +74
- name: Commit and Push changes
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
$ErrorActionPreference = 'Stop'
git config --local user.name "GitHub Actions"
git config --local user.email "[email protected]"
git remote set-url origin https://x-access-token:${GITHUB_TOKEN}@github.com/${{ github.repository }}.git
$filesChanged = git diff --name-only -- *.xliff
if ($filesChanged) {
Write-Host "xliff files were changed. Committing and pushing changes."
foreach ($file in $filesChanged) {
git add $file
git commit -m "Update $file"
}
git push origin HEAD
} else {
Write-Host "No xliff files were changed. Skipping commit and push."
}
if: success()

Commit and push process is well-implemented, but could be more efficient.

The process correctly commits and pushes changes for modified XLIFF files. However, there's an opportunity to optimize the git operations.

Consider the following optimizations:

  1. Use a single commit for all changed files:
-foreach ($file in $filesChanged) {
-  git add $file
-  git commit -m "Update $file"
-}
+git add *.xliff
+git commit -m "Update XLIFF files"
  1. Use git diff --quiet for a more efficient check:
-$filesChanged = git diff --name-only -- *.xliff
-if ($filesChanged) {
+if (-not (git diff --quiet -- *.xliff)) {

Comment on lines +76 to +94
- name: Crowdin upload
# This step must only be run after successfully pushing changes to the repository.
# Otherwise if the push fails, subsequent runs may cause new translation IDs to be created,
# which will cause needless retranslation of existing strings.
env:
crowdinProjectID: ${{ vars.CROWDIN_PROJECT_ID }}
crowdinAuthToken: ${{ secrets.CROWDIN_AUTH_TOKEN }}
run: |
# Check if we changed userGuide.xliff in this action.
# If we did, upload it to Crowdin.
$ErrorActionPreference = 'Stop'
$changed = git diff --name-only ${{GITHUB.SHA}}.. -- user_docs/en/userGuide.xliff
if ($changed) {
Write-Host "Uploading userGuide.xliff to Crowdin"
# 18 is the file ID for userGuide.xliff in Crowdin.
python appVeyor/crowdinSync.py uploadSourceFile 18 user_docs/en/userguide.xliff
} else {
Write-Host "Not uploading userGuide.xliff to Crowdin as it has not changed"
}

Crowdin upload process is correctly implemented, but could be improved.

The process checks for changes in userGuide.xliff and uploads it to Crowdin if changed. However, there are a few points to consider:

  1. The comment mentions a file ID (18) for userGuide.xliff in Crowdin. This might be better stored as a variable or environment variable for easier maintenance.
  2. The git command to check for changes could be more efficient.
  3. Error handling for the Python script execution could be improved.

Consider the following improvements:

  1. Store the Crowdin file ID as an environment variable:
 env:
   crowdinProjectID: ${{ vars.CROWDIN_PROJECT_ID }}
   crowdinAuthToken: ${{ secrets.CROWDIN_AUTH_TOKEN }}
+  crowdinUserGuideFileID: 18
  1. Use a more efficient git command:
-$changed = git diff --name-only ${{GITHUB.SHA}}.. -- user_docs/en/userGuide.xliff
+$changed = git diff --quiet ${{GITHUB.SHA}} HEAD -- user_docs/en/userGuide.xliff
+if ($LASTEXITCODE -eq 1) {
  1. Add error handling for the Python script:
 python appVeyor/crowdinSync.py uploadSourceFile 18 user_docs/en/userguide.xliff
+if ($LASTEXITCODE -ne 0) {
+    Write-Error "Failed to upload userGuide.xliff to Crowdin"
+    exit 1
+}

Comment on lines +41 to +135
def test_markdownTranslate(self):
outDir = self.outDir.name
testDir = self.testDir
self.runMarkdownTranslateCommand(
description="Generate an xliff file from the English 2024.2 user guide markdown file",
args=[
"generateXliff",
"-m",
os.path.join(testDir, "en_2024.2_userGuide.md"),
"-o",
os.path.join(outDir, "en_2024.2_userGuide.xliff"),
],
)
self.runMarkdownTranslateCommand(
description="Regenerate the 2024.2 markdown file from the generated 2024.2 xliff file",
args=[
"generateMarkdown",
"-x",
os.path.join(outDir, "en_2024.2_userGuide.xliff"),
"-o",
os.path.join(outDir, "rebuilt_en_2024.2_userGuide.md"),
"-u",
],
)
self.runMarkdownTranslateCommand(
description="Ensure the regenerated 2024.2 markdown file matches the original 2024.2 markdown file",
args=[
"ensureMarkdownFilesMatch",
os.path.join(outDir, "rebuilt_en_2024.2_userGuide.md"),
os.path.join(testDir, "en_2024.2_userGuide.md"),
],
)
self.runMarkdownTranslateCommand(
description="Update the 2024.2 xliff file with the changes between the English 2024.2 and 2024.3beta6 user guide markdown files",
args=[
"updateXliff",
"-x",
os.path.join(outDir, "en_2024.2_userGuide.xliff"),
"-m",
os.path.join(testDir, "en_2024.3beta6_userGuide.md"),
"-o",
os.path.join(outDir, "en_2024.3beta6_userGuide.xliff"),
],
)
self.runMarkdownTranslateCommand(
description="Regenerate the 2024.3beta6 markdown file from the updated xliff file",
args=[
"generateMarkdown",
"-x",
os.path.join(outDir, "en_2024.3beta6_userGuide.xliff"),
"-o",
os.path.join(outDir, "rebuilt_en_2024.3beta6_userGuide.md"),
"-u",
],
)
self.runMarkdownTranslateCommand(
description="Ensure the regenerated 2024.3beta6 markdown file matches the original 2024.3beta6 markdown",
args=[
"ensureMarkdownFilesMatch",
os.path.join(outDir, "rebuilt_en_2024.3beta6_userGuide.md"),
os.path.join(testDir, "en_2024.3beta6_userGuide.md"),
],
)
self.runMarkdownTranslateCommand(
description="Translate the 2024.3beta6 xliff file to French using the existing pretranslated French 2024.3beta6 user guide markdown file",
args=[
"translateXliff",
"-x",
os.path.join(outDir, "en_2024.3beta6_userGuide.xliff"),
"-l",
"fr",
"-p",
os.path.join(testDir, "fr_pretranslated_2024.3beta6_userGuide.md"),
"-o",
os.path.join(outDir, "fr_2024.3beta6_userGuide.xliff"),
],
)
self.runMarkdownTranslateCommand(
description="Regenerate the French 2024.3beta6 user guide markdown file from the French translated 2024.3beta6 xliff file",
args=[
"generateMarkdown",
"-x",
os.path.join(outDir, "fr_2024.3beta6_userGuide.xliff"),
"-o",
os.path.join(outDir, "fr_2024.3beta6_userGuide.md"),
],
)
self.runMarkdownTranslateCommand(
description="Ensure the regenerated French 2024.3beta6 user guide markdown file matches the original French 2024.3beta6 user guide markdown file",
args=[
"ensureMarkdownFilesMatch",
os.path.join(outDir, "fr_2024.3beta6_userGuide.md"),
os.path.join(testDir, "fr_pretranslated_2024.3beta6_userGuide.md"),
],
)

Consider splitting the test method into smaller, more focused test cases.

While the test_markdownTranslate method is comprehensive and covers various scenarios, it might be beneficial to split it into smaller, more focused test cases. This approach would improve maintainability and make it easier to identify which specific functionality fails if an error occurs.

Consider refactoring the test method into separate test cases, each focusing on a specific functionality:

  1. test_generate_xliff_from_markdown
  2. test_regenerate_markdown_from_xliff
  3. test_update_xliff_with_changes
  4. test_translate_xliff_to_french

This refactoring would make the tests more modular and easier to maintain. For example:

def test_generate_xliff_from_markdown(self):
    outDir = self.outDir.name
    testDir = self.testDir
    self.runMarkdownTranslateCommand(
        description="Generate an xliff file from the English 2024.2 user guide markdown file",
        args=[
            "generateXliff",
            "-m",
            os.path.join(testDir, "en_2024.2_userGuide.md"),
            "-o",
            os.path.join(outDir, "en_2024.2_userGuide.xliff"),
        ],
    )
    # Add assertions to verify the generated XLIFF file

def test_regenerate_markdown_from_xliff(self):
    # Similar structure for testing markdown regeneration
    # ...

# Additional test methods for other functionalities

This approach would make it easier to identify which specific functionality fails if an error occurs and would improve the overall maintainability of the test suite.

Comment on lines +31 to +39
def runMarkdownTranslateCommand(self, description: str, args: list[str]):
failed = False
try:
subprocess.run([sys.executable, self.markdownTranslateScriptPath, *args], check=True)
except subprocess.CalledProcessError:
failed = True
if failed:
message = f"Failed when trying to {description} with command: {' '.join(args)}"
self.fail(message)

Enhance error handling and logging in runMarkdownTranslateCommand.

The current error handling captures subprocess errors, but it doesn't provide detailed information about the failure. This could make debugging difficult, especially for complex test scenarios.

Consider enhancing the error handling to provide more detailed information:

def runMarkdownTranslateCommand(self, description: str, args: list[str]) -> None:
    try:
        result = subprocess.run(
            [sys.executable, self.markdownTranslateScriptPath, *args],
            check=True,
            capture_output=True,
            text=True
        )
    except subprocess.CalledProcessError as e:
        error_message = (
            f"Failed when trying to {description} with command: {' '.join(args)}\n"
            f"Exit code: {e.returncode}\n"
            f"STDOUT: {e.stdout}\n"
            f"STDERR: {e.stderr}"
        )
        self.fail(error_message)

This enhancement captures both stdout and stderr, providing more context for debugging if a command fails. It also includes the exit code in the error message.

Comment on lines +83 to +108
def skeletonizeLine(mdLine: str) -> str | None:
prefix = ""
suffix = ""
if (
mdLine.isspace()
or mdLine.strip() == "[TOC]"
or re_hiddenHeaderRow.match(mdLine)
or re_postTableHeaderLine.match(mdLine)
):
return None
elif m := re_heading.match(mdLine):
prefix, content, suffix = m.groups()
elif m := re_bullet.match(mdLine):
prefix, content = m.groups()
elif m := re_number.match(mdLine):
prefix, content = m.groups()
elif m := re_tableRow.match(mdLine):
prefix, content, suffix = m.groups()
elif m := re_kcTitle.match(mdLine):
prefix, content, suffix = m.groups()
elif m := re_kcSettingsSection.match(mdLine):
prefix, content, suffix = m.groups()
elif re_comment.match(mdLine):
return None
ID = str(uuid.uuid4())
return f"{prefix}$(ID:{ID}){suffix}\n"

Consider simplifying the skeletonizeLine function.

The skeletonizeLine function uses multiple regular expressions and conditional statements. Consider refactoring it to use a dictionary of regular expressions and their corresponding actions to make it more maintainable.

Here's a suggested refactoring:

def skeletonizeLine(mdLine: str) -> str | None:
    patterns = {
        re_heading: lambda m: (m.group(1), m.group(2), m.group(3)),
        re_bullet: lambda m: (m.group(1), m.group(2), ""),
        re_number: lambda m: (m.group(1), m.group(2), ""),
        re_tableRow: lambda m: (m.group(1), m.group(2), m.group(3)),
        re_kcTitle: lambda m: (m.group(1), m.group(2), m.group(3)),
        re_kcSettingsSection: lambda m: (m.group(1), m.group(2), m.group(3)),
    }
    
    if mdLine.isspace() or mdLine.strip() == "[TOC]" or re_hiddenHeaderRow.match(mdLine) or re_postTableHeaderLine.match(mdLine):
        return None
    
    for pattern, action in patterns.items():
        if m := pattern.match(mdLine):
            prefix, content, suffix = action(m)
            ID = str(uuid.uuid4())
            return f"{prefix}$(ID:{ID}){suffix}\n"
    
    if re_comment.match(mdLine):
        return None
    
    ID = str(uuid.uuid4())
    return f"$(ID:{ID})\n"

Comment on lines +486 to +541
def pretranslateAllPossibleLanguages(langsDir: str, mdBaseName: str):
# This function walks through all language directories in the given directory, skipping en (English), and translates the English xliff and skeleton using the lang's pretranslated md file
enXliffPath = os.path.join(langsDir, "en", f"{mdBaseName}.xliff")
if not os.path.exists(enXliffPath):
raise ValueError(f"English xliff file {enXliffPath} does not exist")
allLangs = set()
succeededLangs = set()
skippedLangs = set()
for langDir in os.listdir(langsDir):
if langDir == "en":
continue
langDirPath = os.path.join(langsDir, langDir)
if not os.path.isdir(langDirPath):
continue
langPretranslatedMdPath = os.path.join(langDirPath, f"{mdBaseName}.md")
if not os.path.exists(langPretranslatedMdPath):
continue
allLangs.add(langDir)
langXliffPath = os.path.join(langDirPath, f"{mdBaseName}.xliff")
if os.path.exists(langXliffPath):
print(f"Skipping {langDir} as the xliff file already exists")
skippedLangs.add(langDir)
continue
try:
translateXliff(
xliffPath=enXliffPath,
lang=langDir,
pretranslatedMdPath=langPretranslatedMdPath,
outputPath=langXliffPath,
allowBadAnchors=True,
)
except Exception as e:
print(f"Failed to translate {langDir}: {e}")
continue
rebuiltLangMdPath = os.path.join(langDirPath, f"rebuilt_{mdBaseName}.md")
try:
generateMarkdown(
xliffPath=langXliffPath,
outputPath=rebuiltLangMdPath,
)
except Exception as e:
print(f"Failed to rebuild {langDir} markdown: {e}")
os.remove(langXliffPath)
continue
try:
ensureMarkdownFilesMatch(rebuiltLangMdPath, langPretranslatedMdPath, allowBadAnchors=True)
except Exception as e:
print(f"Rebuilt {langDir} markdown does not match pretranslated markdown: {e}")
os.remove(langXliffPath)
continue
os.remove(rebuiltLangMdPath)
print(f"Successfully pretranslated {langDir}")
succeededLangs.add(langDir)
if len(skippedLangs) > 0:
print(f"Skipped {len(skippedLangs)} languages already pretranslated.")
print(f"Pretranslated {len(succeededLangs)} out of {len(allLangs) - len(skippedLangs)} languages.")

Consider adding parallel processing to pretranslateAllPossibleLanguages function.

The pretranslateAllPossibleLanguages function processes languages sequentially. For improved performance, consider using parallel processing to handle multiple languages simultaneously.

You could use the concurrent.futures module to implement parallel processing:

import concurrent.futures

def pretranslateAllPossibleLanguages(langsDir: str, mdBaseName: str):
    # ... (existing code)

    def process_language(langDir):
        # ... (existing code for processing a single language)

    with concurrent.futures.ThreadPoolExecutor() as executor:
        futures = [executor.submit(process_language, langDir) for langDir in allLangs if langDir != "en"]
        for future in concurrent.futures.as_completed(futures):
            try:
                future.result()
            except Exception as e:
                print(f"Failed to process language: {e}")

    # ... (existing code for printing results)

@dpy013 dpy013 closed this Sep 2, 2024
@dpy013 dpy013 deleted the fix-actions branch September 2, 2024 05:27